Results 1 - 20 of 27
1.
IEEE Transactions on Instrumentation and Measurement ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-2296656

ABSTRACT

Accurate segmentation of COVID-19 infection from computed tomography (CT) scans is critical for the diagnosis and treatment of COVID-19. However, infection segmentation is a challenging task because of the varied textures, sizes, and locations of infections, low contrast, and blurred boundaries. To address these problems, we propose a novel Multi-scale Wavelet Guidance Network (MWG-Net) for COVID-19 lung infection segmentation that integrates multi-scale information from the wavelet domain into the encoder and decoder of a convolutional neural network (CNN). In particular, we propose the Wavelet Guidance Module (WGM) and the Wavelet & Edge Guidance Module (WEGM). The WGM guides the encoder to extract infection details through multi-scale spatial and frequency features in the wavelet domain, while the WEGM guides the decoder to recover infection details through multi-scale wavelet representations and multi-scale infection edge information. In addition, a Progressive Fusion Module (PFM) is developed to aggregate and explore the multi-scale features of the encoder and decoder. Notably, we establish a COVID-19 segmentation dataset (named COVID-Seg-100) containing more than 5800 annotated slices for performance evaluation. Furthermore, we conduct extensive experiments to compare our method with other state-of-the-art approaches on COVID-Seg-100 and two publicly available datasets, MosMedData and COVID-SemiSeg. The results show that our MWG-Net outperforms state-of-the-art methods on these datasets and achieves more accurate and promising COVID-19 lung infection segmentation. IEEE

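A minimal sketch of the general idea of wavelet guidance, assuming a standard one-level Haar DWT and a hypothetical encoder feature map; it does not reproduce the WGM, WEGM, or PFM modules described in the abstract.

```python
# Illustrative only: one-level Haar DWT of a CT slice used as an extra
# guidance input for an encoder stage of matching resolution.
import numpy as np
import pywt
import torch

def wavelet_guidance(slice_2d: np.ndarray) -> torch.Tensor:
    """Return a 4-channel tensor (LL, LH, HL, HH) at half resolution."""
    cA, (cH, cV, cD) = pywt.dwt2(slice_2d.astype(np.float32), "haar")
    bands = np.stack([cA, cH, cV, cD], axis=0)           # (4, H/2, W/2)
    return torch.from_numpy(bands).unsqueeze(0)           # (1, 4, H/2, W/2)

# Example: concatenate with stride-2 encoder features of matching size.
ct_slice = np.random.rand(512, 512).astype(np.float32)    # stand-in for a CT slice
guidance = wavelet_guidance(ct_slice)                      # (1, 4, 256, 256)
encoder_feat = torch.randn(1, 64, 256, 256)                # hypothetical encoder output
fused = torch.cat([encoder_feat, guidance], dim=1)         # (1, 68, 256, 256)
```
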
2.
1st IEEE International Conference on Automation, Computing and Renewable Systems, ICACRS 2022 ; : 627-633, 2022.
Article in English | Scopus | ID: covidwho-2250295

ABSTRACT

The rapid spread of the disease after COVID-19's emergence in 2019 has presented enormous problems to medical institutions. Diagnosis can proceed more rapidly if the infected region in a COVID-19 CT image is segmented automatically, which helps clinicians promptly identify the patient's illness. Automated lung infection identification from computed tomography scans is a widely applicable approach; however, segmenting infected areas from CT slices is quite difficult. In this work, a deep-learning-based diagnosis system is developed to identify and quantify COVID-19 infection and to screen for pneumonia using CT imaging. Three segmentation techniques are used: U-Net++, the U-Net architecture based on a CNN encoder and CNN decoder, and Attention U-Net. These methods are applied for fast and accurate image segmentation to produce lung and infection segmentation models. Fourfold cross-validation is used as a resampling method to improve the estimate of model skill on unseen data. The lung and infection volumes are reconstructed to enable volume-ratio calculation and determine the infection rate. 20 CT scan cases were used in this study, and the data were split into a training set (70%) and a validation set (30%). Among the three architectures, the basic U-Net performed better than the other two. © 2022 IEEE

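A minimal sketch of the volume-ratio step described above, assuming binary lung and infection masks have already been predicted for a CT volume; the function and variable names are illustrative, not taken from the paper.

```python
# Illustrative only: infection rate as the fraction of lung voxels marked
# as infected (voxel spacing cancels out of the ratio).
import numpy as np

def infection_rate(lung_mask: np.ndarray, infection_mask: np.ndarray) -> float:
    """Infection volume divided by lung volume for binary 3D masks."""
    lung = lung_mask.astype(bool)
    infection = infection_mask.astype(bool) & lung
    lung_vox = int(lung.sum())
    if lung_vox == 0:
        return 0.0
    return int(infection.sum()) / lung_vox

lung = np.zeros((64, 128, 128), dtype=bool); lung[:, 32:96, 32:96] = True
inf = np.zeros_like(lung); inf[20:30, 40:60, 40:60] = True
print(f"infection rate: {infection_rate(lung, inf):.3%}")
```
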
3.
IEEE Transactions on Consumer Electronics ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-2052082

ABSTRACT

The coronavirus disease 2019 (COVID-19) continues to have a negative impact on healthcare systems around the world, even though vaccines have been developed and national vaccination coverage rates are steadily increasing. At the current stage, automatically segmenting the lung infection area from CT images is essential for the diagnosis and treatment of COVID-19. Thanks to the development of deep learning technology, several deep learning solutions for lung infection segmentation have been proposed. However, due to the scattered distribution, complex background interference, and blurred boundaries, the accuracy and completeness of existing models are still unsatisfactory. To this end, we propose a boundary guided semantic learning network (BSNet) in this paper. On the one hand, a dual-branch semantic enhancement module that combines top-level semantic preservation and progressive semantic integration is designed to model the complementary relationship between different high-level features, thereby promoting the generation of more complete segmentation results. On the other hand, a mirror-symmetric boundary guidance module is proposed to accurately detect the boundaries of the lesion regions in a mirror-symmetric way. Experiments on the publicly available dataset demonstrate that our BSNet outperforms existing state-of-the-art competitors and achieves a real-time inference speed of 44 FPS. The code and results of our BSNet are available at https://github.com/rmcong/BSNet. IEEE

4.
Biomed Signal Process Control ; 79: 104250, 2023 Jan.
Article in English | MEDLINE | ID: covidwho-2041602

ABSTRACT

Automatic segmentation of infected regions in computed tomography (CT) images is necessary for the initial diagnosis of COVID-19. Deep-learning-based methods have the potential to automate this task but require a large amount of data with pixel-level annotations. Training a deep network with annotated lung cancer CT images, which are easier to obtain, can alleviate this problem to some extent. However, this approach may suffer from a reduction in performance when applied to unseen COVID-19 images during the testing phase, caused by differences in image intensity and object region distribution between the training set and the test set. In this paper, we propose a novel unsupervised method for COVID-19 infection segmentation that aims to learn domain-invariant features from lung cancer and COVID-19 images to improve the generalization ability of the segmentation network on COVID-19 CT images. First, to address the intensity difference, we propose a novel data augmentation module based on the Fourier Transform, which transfers the annotated lung cancer data into the style of COVID-19 images. Second, to reduce the distribution difference, we design a teacher-student network to learn rotation-invariant features for segmentation. The experiments demonstrate that, even without access to the annotations of the COVID-19 CT images during the training phase, the proposed network achieves state-of-the-art segmentation performance on COVID-19 infection.

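A rough sketch of a Fourier-transform-based style transfer, assuming the module works along the lines of low-frequency amplitude-spectrum swapping; the paper's exact augmentation may differ, and the `beta` band size is an assumed parameter.

```python
# Illustrative only: copy the low-frequency amplitude of a target-style slice
# into a source slice while keeping the source phase.
import numpy as np

def fourier_style_transfer(src: np.ndarray, ref: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Transfer the low-frequency amplitude of `ref` into `src` (both 2D)."""
    fft_src, fft_ref = np.fft.fft2(src), np.fft.fft2(ref)
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_ref = np.abs(fft_ref)

    # replace a centered low-frequency square of the amplitude spectrum
    amp_src, amp_ref = np.fft.fftshift(amp_src), np.fft.fftshift(amp_ref)
    h, w = src.shape
    b, ch, cw = int(min(h, w) * beta), h // 2, w // 2
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_ref[ch - b:ch + b, cw - b:cw + b]
    amp_src = np.fft.ifftshift(amp_src)

    return np.real(np.fft.ifft2(amp_src * np.exp(1j * pha_src)))

cancer_slice = np.random.rand(256, 256)   # annotated source-domain slice (stand-in)
covid_slice = np.random.rand(256, 256)    # unannotated target-style slice (stand-in)
augmented = fourier_style_transfer(cancer_slice, covid_slice)
```
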
5.
International Journal of Intelligent Engineering and Systems ; 15(5):535-547, 2022.
Article in English | Scopus | ID: covidwho-2026234

ABSTRACT

The global coronavirus disease 2019 (COVID-19) pandemic has caused the world to face a health crisis. Automated detection of COVID-19 infection from computed tomography (CT) images has improved healthcare for treating COVID-19. However, segmentation of infected areas in lung CT images faces several challenges: detailed infection characteristics and low contrast in CT scans of infected lungs. Because COVID-19 is still a new disease, only a small amount of data carries a doctor's annotation, while a large amount of data carries pseudo labels, and pseudo labels have a low confidence level and a high error rate. Therefore, using 1600 pseudo-labeled images and 50 doctor-labeled images, we apply pseudo supervision as the core idea: mutual training between two different models with a dynamic loss function, called dynamic mutual training (DMT). DMT trains on pseudo labels together with the doctor's labels so that the segmented areas can be trusted. The best result obtained is 91.32%, with a loss of 0.19, a Dice score of 0.23, an IoU of 0.781, a precision of 0.843, a sensitivity of 0.753, and a specificity of 0.845. We also compare our method with other segmentation methods such as U-Net, which is widely preferred for medical images, and Mask R-CNN, regarded as one of the best segmentation methods. This comparison indicates that DMT gives the best experimental results, with a Dice score 2-30% higher, for segmenting the areas affected by COVID-19 in lung CT scans. © 2022 International Journal of Intelligent Engineering and Systems. All Rights Reserved.

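A rough sketch of pseudo supervision by mutual training between two models, with a simple confidence-based weight standing in for the dynamic loss; the actual DMT formulation is not reproduced here, and all names and thresholds are assumptions.

```python
# Illustrative only: each model is supervised by the other's confident
# pseudo labels; a supervised loss on doctor-labeled data would be added.
import torch
import torch.nn.functional as F

def mutual_pseudo_loss(logits_a: torch.Tensor, logits_b: torch.Tensor,
                       conf_thresh: float = 0.8) -> torch.Tensor:
    prob_a, prob_b = torch.sigmoid(logits_a), torch.sigmoid(logits_b)
    pseudo_a = (prob_a > 0.5).float()      # pseudo labels from model A
    pseudo_b = (prob_b > 0.5).float()      # pseudo labels from model B
    # per-pixel weights: only confident predictions contribute
    w_a = (torch.abs(prob_a - 0.5) * 2 > conf_thresh).float()
    w_b = (torch.abs(prob_b - 0.5) * 2 > conf_thresh).float()
    loss_a = (F.binary_cross_entropy_with_logits(logits_b, pseudo_a, reduction="none") * w_a).mean()
    loss_b = (F.binary_cross_entropy_with_logits(logits_a, pseudo_b, reduction="none") * w_b).mean()
    return loss_a + loss_b

loss = mutual_pseudo_loss(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
```
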
6.
Front Med (Lausanne) ; 9: 940960, 2022.
Article in English | MEDLINE | ID: covidwho-2022771

ABSTRACT

With the onset of the COVID-19 pandemic, quantifying the condition of positively diagnosed patients is of paramount importance. Chest CT scans can be used to measure the severity of a lung infection and to isolate the involvement sites in order to increase awareness of a patient's disease progression. In this work, we developed a deep learning framework for lung infection severity prediction. To this end, we collected a dataset of 232 chest CT scans and included two public datasets with an additional 59 scans for our model's training, and used two external test sets with 21 scans for evaluation. On an input chest computed tomography (CT) scan, our framework performs, in parallel, lung lobe segmentation using a pre-trained model and infection segmentation using three distinct trained SE-ResNet18 based U-Net models, one for each of the axial, coronal, and sagittal views. With the lobe and infection segmentation masks, we calculate the infection severity percentage in each lobe and classify that percentage into 6 categories of infection severity score using a k-nearest neighbors (k-NN) model. The lobe segmentation model achieved a Dice Similarity Score (DSC) in the range of [0.918, 0.981] for different lung lobes, and our infection segmentation models gained DSC scores of 0.7254 and 0.7105 on our two test sets, respectively. Similarly, two resident radiologists were assigned the same infection segmentation tasks, for which they obtained DSC scores of 0.7281 and 0.6693 on the two test sets. Finally, performance on the infection severity score over the entire test datasets was calculated, for which the framework resulted in a Mean Absolute Error (MAE) of 0.505 ± 0.029, while the resident radiologists' was 0.571 ± 0.039.

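A minimal sketch of the per-lobe severity step, assuming lobe and infection masks are already available; the severity categories, training data, and k value of the k-NN model are illustrative stand-ins, not the paper's.

```python
# Illustrative only: per-lobe infection percentage fed to a k-NN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def lobe_infection_percentages(lobe_labels: np.ndarray, infection: np.ndarray,
                               n_lobes: int = 5) -> np.ndarray:
    """Percentage of infected voxels in each labelled lung lobe (1..n_lobes)."""
    perc = np.zeros(n_lobes, dtype=np.float32)
    for lobe_id in range(1, n_lobes + 1):
        lobe = lobe_labels == lobe_id
        if lobe.any():
            perc[lobe_id - 1] = 100.0 * infection[lobe].mean()
    return perc

# hypothetical training data: per-lobe percentages -> severity category (0..5)
X_train = np.random.rand(40, 5) * 100
y_train = np.random.randint(0, 6, size=40)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

lobes = np.random.randint(0, 6, size=(32, 64, 64))
infection = (np.random.rand(32, 64, 64) > 0.9).astype(np.float32)
severity = knn.predict(lobe_infection_percentages(lobes, infection).reshape(1, -1))
```
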
7.
Pol J Radiol ; 87: e478-e486, 2022.
Article in English | MEDLINE | ID: covidwho-2010449

ABSTRACT

Purpose: The novel coronavirus COVID-19, which spread globally in late December 2019, has caused a global health crisis. Chest computed tomography (CT) has played a pivotal role in providing useful information for clinicians to detect COVID-19. However, segmenting COVID-19-infected regions from chest CT scans is challenging. Therefore, it is desirable to develop an efficient tool for automated segmentation of COVID-19 lesions using chest CT. Hence, we aimed to propose 2D deep-learning algorithms to automatically segment COVID-19-infected regions from chest CT slices and to evaluate their performance. Material and methods: Herein, three known deep learning networks, U-Net, U-Net++, and Res-Unet, were trained from scratch for automated segmentation of COVID-19 lesions using chest CT images. The dataset consists of 20 labelled COVID-19 chest CT volumes. A total of 2112 images were used. The dataset was split into 80% for training and validation and 20% for testing the proposed models. Segmentation performance was assessed using the Dice similarity coefficient, average symmetric surface distance (ASSD), mean absolute error (MAE), sensitivity, specificity, and precision. Results: All proposed models achieved good performance for COVID-19 lesion segmentation. Compared with Res-Unet, the U-Net and U-Net++ models provided better results, with a mean Dice value of 85.0%. Compared with all models, U-Net gained the highest segmentation performance, with 86.0% sensitivity and 2.22 mm ASSD. The U-Net model obtained 1%, 2%, and 0.66 mm improvements over the Res-Unet model in the Dice, sensitivity, and ASSD, respectively. Compared with Res-Unet, U-Net++ achieved 1%, 2%, 0.1 mm, and 0.23 mm improvements in the Dice, sensitivity, ASSD, and MAE, respectively. Conclusions: Our data indicate that the proposed models achieve an average Dice value greater than 84.0%. Two-dimensional deep learning models were able to accurately segment COVID-19 lesions from chest CT images, assisting radiologists in faster screening and quantification of the lesion regions for further treatment. Nevertheless, further studies will be required to evaluate the clinical performance and robustness of the proposed models for COVID-19 semantic segmentation.

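For reference, the overlap metrics reported above (Dice, sensitivity, specificity, precision, MAE) can be computed from binary masks as in the sketch below; ASSD requires a surface-distance implementation and is omitted here.

```python
# Illustrative only: confusion-matrix-based metrics for binary segmentation masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    return {
        "dice":        2 * tp / (2 * tp + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "precision":   tp / (tp + fp + eps),
        "mae":         np.abs(pred.astype(np.float32) - gt.astype(np.float32)).mean(),
    }

pred = np.random.rand(128, 128) > 0.5
gt = np.random.rand(128, 128) > 0.5
print(segmentation_metrics(pred, gt))
```
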
8.
Inf Sci (N Y) ; 612: 745-758, 2022 Oct.
Article in English | MEDLINE | ID: covidwho-2007772

ABSTRACT

Since the outbreak of Coronavirus Disease 2019 (COVID-19) in 2020, it has significantly affected the global health system. The use of deep learning technology to automatically segment pneumonia lesions from Computed Tomography (CT) images can greatly reduce the workload of physicians and extend traditional diagnostic methods. However, there are still challenges in tackling this task, including obtaining high-quality annotations and the subtle differences between classes. In the present study, a novel deep neural network based on the ResNet architecture is proposed to automatically segment infected areas from CT images. To reduce the annotation cost, a Vector Quantized Variational AutoEncoder (VQ-VAE) branch is added to reconstruct the input images in order to regularize the shared decoder, and the latent maps of the VQ-VAE are utilized to further improve the feature representation. Moreover, a novel proportions loss is presented for mitigating class imbalance and enhancing the generalization ability of the model. In addition, a semi-supervised mechanism based on adversarial learning is added to the network, which can utilize the information of trusted regions in unlabeled images to further regularize the network. Extensive experiments on the COVID-SemiSeg dataset are performed to verify the superiority of the proposed method, and the results are in line with expectations.

9.
Comput Biol Med ; 149: 106033, 2022 10.
Article in English | MEDLINE | ID: covidwho-2003990

ABSTRACT

Medical image segmentation is a key initial step in several therapeutic applications. While most automatic segmentation models are supervised and require a well-annotated paired dataset, we introduce a novel annotation-free pipeline to perform segmentation of COVID-19 CT images. Our pipeline consists of three main subtasks: automatically generating a 3D pseudo-mask in self-supervised mode using a generative adversarial network (GAN), leveraging the quality of the pseudo-mask, and building a multi-objective segmentation model to predict lesions. Our proposed 3D GAN architecture removes infected regions from COVID-19 images and generates synthesized healthy images while keeping the 3D structure of the lung the same. Then, a 3D pseudo-mask is generated by subtracting the synthesized healthy images from the original COVID-19 CT images. We enhance the pseudo-masks using a contrastive learning approach to build a region-aware segmentation model that focuses more on the infected area. The final segmentation model can be used to predict lesions in COVID-19 CT images without any manual annotation at the pixel level. We show that our approach outperforms the existing state-of-the-art unsupervised and weakly-supervised segmentation techniques on three datasets by a reasonable margin. Specifically, our method improves the segmentation results for CT images with low infection by increasing sensitivity by 20% and the Dice score by up to 4%. The proposed pipeline overcomes some of the major limitations of existing unsupervised segmentation approaches and opens up a novel horizon for different applications of medical image segmentation.


Subject(s)
COVID-19 , Image Processing, Computer-Assisted , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Tomography, X-Ray Computed
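A minimal sketch of the pseudo-mask generation step described above: the synthesized healthy volume is subtracted from the original COVID-19 volume and the residual is thresholded. The GAN that produces the healthy volume is not shown, and the threshold is an assumed value.

```python
# Illustrative only: 3D pseudo-mask from the residual between the original
# COVID-19 volume and a GAN-synthesized healthy volume.
import numpy as np

def pseudo_mask(covid_ct: np.ndarray, synthesized_healthy: np.ndarray,
                threshold: float = 0.1) -> np.ndarray:
    """Binary 3D pseudo-mask from the positive residual between the two volumes."""
    residual = np.clip(covid_ct - synthesized_healthy, 0, None)  # infections are denser
    return residual > threshold

covid_vol = np.random.rand(32, 128, 128)
healthy_vol = covid_vol * 0.9                      # stand-in for the GAN output
mask = pseudo_mask(covid_vol, healthy_vol)
print("pseudo-mask voxels:", int(mask.sum()))
```
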
10.
IEEE Transactions on Instrumentation and Measurement ; : 1-1, 2022.
Article in English | Scopus | ID: covidwho-1992679

ABSTRACT

The spread of COVID-19 has brought a huge disaster to the world, and the automatic segmentation of infection regions can help doctors make diagnoses quickly and reduce their workload. However, there are several challenges to accurate and complete segmentation, such as the scattered distribution of infection areas, complex background noise, and blurred segmentation boundaries. To this end, in this paper, we propose a novel network for automatic COVID-19 lung infection segmentation from CT images, named BCS-Net, which considers the boundary, context, and semantic attributes. The BCS-Net follows an encoder-decoder architecture, and most of the design effort focuses on the decoder stage, which includes three progressive Boundary-Context-Semantic Reconstruction (BCSR) blocks. In each BCSR block, the attention-guided global context (AGGC) module is designed to learn the most valuable encoder features for the decoder by highlighting important spatial and boundary locations and modeling the global context dependence. Besides, a semantic guidance (SG) unit generates a semantic guidance map to refine the decoder features by aggregating multi-scale high-level features at the intermediate resolution. Extensive experiments demonstrate that our proposed framework outperforms the existing competitors both qualitatively and quantitatively. IEEE

11.
Signal Process Image Commun ; 108: 116835, 2022 Oct.
Article in English | MEDLINE | ID: covidwho-1966640

ABSTRACT

Coronavirus Disease 2019 (COVID-19) has spread globally since the first case was reported in December 2019, becoming a worldwide existential health crisis with over 90 million total confirmed cases. Segmentation of lung infection from computed tomography (CT) scans via deep learning methods has great potential in assisting the diagnosis and healthcare for COVID-19. However, current deep learning methods for segmenting infection regions from lung CT images suffer from three problems: (1) low differentiation of semantic features between the COVID-19 infection regions, other pneumonia regions, and normal lung tissues; (2) high variation of visual characteristics between different COVID-19 cases or stages; (3) high difficulty in constraining the irregular boundaries of the COVID-19 infection regions. To solve these problems, a multi-input directional UNet (MID-UNet) is proposed to segment COVID-19 infections in lung CT images. For the input part of the network, we first propose an image blurry descriptor to reflect the texture characteristics of the infections. Then the original CT image, the image enhanced by adaptive histogram equalization, the image filtered by the non-local means filter, and the blurry feature map are adopted together as the input of the proposed network. For the structure of the network, we propose the directional convolution block (DCB), which consists of four directional convolution kernels. DCBs are applied on the short-cut connections to refine the extracted features before they are transferred to the de-convolution parts. Furthermore, we propose a contour loss based on the local curvature histogram and combine it with the binary cross entropy (BCE) loss and the intersection over union (IoU) loss for a better segmentation boundary constraint. Experimental results on the COVID-19-CT-Seg dataset demonstrate that our proposed MID-UNet provides superior performance over the state-of-the-art methods in segmenting COVID-19 infections from CT images.

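A minimal sketch of combining the BCE loss with a soft IoU loss for binary segmentation logits; the curvature-histogram contour loss from the abstract is not reproduced, and the loss weights are assumed.

```python
# Illustrative only: weighted sum of BCE and a soft (differentiable) IoU loss.
import torch
import torch.nn.functional as F

def bce_iou_loss(logits: torch.Tensor, target: torch.Tensor,
                 w_bce: float = 1.0, w_iou: float = 1.0) -> torch.Tensor:
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = (prob + target - prob * target).sum(dim=(1, 2, 3))
    iou = 1.0 - (inter + 1.0) / (union + 1.0)      # soft IoU loss per sample
    return w_bce * bce + w_iou * iou.mean()

loss = bce_iou_loss(torch.randn(2, 1, 64, 64),
                    torch.randint(0, 2, (2, 1, 64, 64)).float())
```
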
12.
Pattern Recognit ; 131: 108826, 2022 Nov.
Article in English | MEDLINE | ID: covidwho-1946219

ABSTRACT

The devastating outbreak of Coronavirus Disease (COVID-19) cases in early 2020 led the world to face a health crisis. The exponential reproduction rate of COVID-19 can only be reduced by early and correct diagnosis of infection cases. Initial research findings reported that radiological examinations using CT and CXR modalities successfully reduced the false negatives of the RT-PCR test. This research study aims to develop an explainable diagnosis system for the detection and infection-region quantification of COVID-19 disease. Existing research studies have successfully explored deep learning approaches with high performance measures but lacked generalization and interpretability for COVID-19 diagnosis. In this study, we address these issues with Covid-MANet, an automated end-to-end multi-task attention network that works on 5 classes in three stages for COVID-19 infection screening. The first stage of the Covid-MANet network localizes the attention of the model to the relevant lung region for disease recognition. The second stage differentiates COVID-19 cases from bacterial pneumonia, viral pneumonia, normal, and tuberculosis cases, respectively. To improve interpretation and explainability, three experiments were conducted to explore the most coherent and appropriate classification approach. Moreover, the multi-scale attention model MA-DenseNet201 is proposed for the classification of COVID-19 cases. The final stage of the Covid-MANet network quantifies the proportion of infection and the severity of COVID-19 in the lungs. The COVID-19 cases are graded into more specific severity levels such as mild, moderate, severe, and critical as per the score assigned by the RALE scoring system. The MA-DenseNet201 classification model outperforms eight state-of-the-art CNN models in terms of sensitivity and interpretation with the lung localization network. The COVID-19 infection segmentation by U-Net with a DenseNet121 encoder achieves a Dice score of 86.15%, outperforming U-Net, U-Net++, Attention U-Net, and R2U-Net with VGG16, ResNet50, and DenseNet201 encoders. The proposed network not only classifies images based on the predicted label but also highlights the infection by segmentation/localization of model-focused regions to support explainable decisions. The MA-DenseNet201 model with a segmentation-based cropping approach achieves a maximum interpretation of 96% with a COVID-19 sensitivity of 97.75%. Finally, based on class-varied sensitivity analysis, the Covid-MANet ensemble network of MA-DenseNet201, ResNet50, and MobileNet achieves 95.05% accuracy and 98.75% COVID-19 sensitivity. The proposed model is externally validated on an unseen dataset, yielding 98.17% COVID-19 sensitivity.

13.
Applied Sciences (Switzerland) ; 12(10), 2022.
Article in English | Scopus | ID: covidwho-1875463

ABSTRACT

Background: Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is a global threat impacting the lives of millions of people worldwide. Automated detection of lung infections from Computed Tomography scans represents an excellent alternative; however, segmenting infected regions from CT slices encounters many challenges. Objective: To develop a diagnosis system based on deep learning techniques to detect and quantify COVID-19 infection and to screen for pneumonia using CT imaging. Method: The Contrast Limited Adaptive Histogram Equalization pre-processing method was used to remove noise and intensity inhomogeneity. Black slices were also removed to crop only the region of interest containing the lungs. A U-Net architecture, based on CNN encoder and CNN decoder approaches, is then introduced for fast and precise image segmentation to obtain the lung and infection segmentation models. For a better estimate of skill on unseen data, fourfold cross-validation was used as a resampling procedure. A three-layered CNN architecture, with additional fully connected layers followed by a Softmax layer, was used for classification. Lung and infection volumes were reconstructed to allow volume-ratio computation and obtain the infection rate. Results: Starting from 20 CT scan cases, the data were divided into 70% for the training dataset and 30% for the validation dataset. Experimental results demonstrated that the proposed system achieves Dice scores of 0.98 and 0.91 for the lung and infection segmentation tasks, respectively, and an accuracy of 0.98 for the classification task. Conclusions: The proposed workflow aimed at obtaining good performance for the system's different components while, at the same time, dealing with the reduced datasets used for training. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.

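A minimal sketch of the CLAHE pre-processing step on a single CT slice; the HU window, clip limit, and tile size are assumed values, not taken from the paper.

```python
# Illustrative only: window the HU values, rescale to 8 bit, then apply CLAHE.
import cv2
import numpy as np

def clahe_preprocess(slice_hu: np.ndarray, lo: float = -1000, hi: float = 400) -> np.ndarray:
    windowed = np.clip(slice_hu, lo, hi)
    scaled = ((windowed - lo) / (hi - lo) * 255).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(scaled)

slice_hu = np.random.uniform(-1000, 400, size=(512, 512)).astype(np.float32)
enhanced = clahe_preprocess(slice_hu)
```
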
14.
Visual Computer ; : 14, 2022.
Article in English | Web of Science | ID: covidwho-1694592

ABSTRACT

The coronavirus disease 2019 (COVID-19) epidemic has spread worldwide and the healthcare system is in crisis. Accurate, automated, and rapid segmentation of COVID-19 lesions in computed tomography (CT) images can help doctors diagnose and provide prognostic information. However, the variety of lesions and the small regions of early lesions complicate their segmentation. To solve these problems, we propose a new SAUNet++ model with a squeeze-excitation residual (SER) module and an atrous spatial pyramid pooling (ASPP) module. The SER module can assign more weight to more important channels and mitigate the problem of gradient disappearance; the ASPP module can obtain context information by atrous convolution with various sampling rates. In addition, the generalized Dice loss (GDL), which can reduce the correlation between lesion size and Dice loss, is introduced to address the segmentation of small COVID-19 lesion regions. We collected multinational CT scan data from China, Italy, and Russia and conducted extensive comparative and ablation studies. The experimental results demonstrate that our method outperforms state-of-the-art models and can effectively improve the accuracy of COVID-19 lesion segmentation on the Dice similarity coefficient (ours: 87.38% vs. U-Net++: 84.25%), sensitivity (ours: 93.28% vs. U-Net++: 89.85%), and Hausdorff distance (ours: 19.99 mm vs. U-Net++: 26.79 mm).

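A minimal sketch of a generalized Dice loss (GDL), in which each class is weighted by the inverse square of its volume so that small lesions are not dominated by large structures; details of the formulation used in SAUNet++ may differ.

```python
# Illustrative only: generalized Dice loss over per-class probability maps.
import torch

def generalized_dice_loss(probs: torch.Tensor, target: torch.Tensor,
                          eps: float = 1e-6) -> torch.Tensor:
    """probs, target: (N, C, H, W) class probabilities and one-hot labels."""
    dims = (0, 2, 3)
    w = 1.0 / (target.sum(dim=dims) ** 2 + eps)            # per-class weights
    inter = (probs * target).sum(dim=dims)
    denom = (probs + target).sum(dim=dims)
    return 1.0 - 2.0 * (w * inter).sum() / ((w * denom).sum() + eps)

probs = torch.softmax(torch.randn(2, 2, 64, 64), dim=1)
labels = torch.randint(0, 2, (2, 64, 64))
target = torch.nn.functional.one_hot(labels, 2).permute(0, 3, 1, 2).float()
print(generalized_dice_loss(probs, target))
```
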
15.
10th International Workshop on Clinical Image-Based Procedures, CLIP 2021, 2nd MICCAI Workshop on Distributed and Collaborative Learning, DCL 2021, 1st MICCAI Workshop, LL-COVID19, 1st Secure and Privacy-Preserving Machine Learning for Medical Imaging Workshop and Tutorial, PPML 2021, held in conjunction with 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021 ; 12969 LNCS:88-97, 2021.
Article in English | Scopus | ID: covidwho-1565295

ABSTRACT

This paper proposes a segmentation method for infection regions in the lung from CT volumes of COVID-19 patients. COVID-19 has spread worldwide, causing many infected patients and deaths. CT image-based diagnosis of COVID-19 can provide quick and accurate diagnosis results. An automated segmentation method of infection regions in the lung provides a quantitative criterion for diagnosis. Previous methods employ whole 2D image- or 3D volume-based processes. Infection regions have considerable variation in their sizes, and such processes easily miss small infection regions. A patch-based process is effective for segmenting small targets. However, selecting the appropriate patch size is difficult in infection region segmentation. We utilize the scale uncertainty among various receptive field sizes of a segmentation FCN to obtain infection regions. The receptive field sizes are defined by the patch size and the resolution of the volumes from which patches are clipped. This paper proposes an infection segmentation network (ISNet) that performs patch-based segmentation and a scale uncertainty-aware prediction aggregation method that refines the segmentation result. We design ISNet to segment infection regions that have various intensity values; ISNet has multiple encoding paths to process patch volumes normalized by multiple intensity ranges. We collect the prediction results generated by ISNets having various receptive field sizes. Scale uncertainty among the prediction results is extracted by the prediction aggregation method, and we use an aggregation FCN to generate a refined segmentation result that considers the scale uncertainty among the predictions. In our experiments using 199 chest CT volumes of COVID-19 cases, the prediction aggregation method improved the Dice similarity score from 47.6% to 62.1%. © 2021, Springer Nature Switzerland AG.

16.
SN Comput Sci ; 3(1): 13, 2022.
Article in English | MEDLINE | ID: covidwho-1491546

ABSTRACT

The novelty of COVID-19 and the speed of its spread created colossal chaos and impelled researchers worldwide to exploit all resources and capabilities to understand and analyze the characteristics of the coronavirus in terms of spread routes and virus incubation time. For this, existing medical imaging such as CT-scan and X-ray images is used. For example, CT-scan images can be used for the detection of lung infection. However, the quality of these images and the infection characteristics limit the effectiveness of these features. Using artificial intelligence (AI) tools and computer vision algorithms, detection can be made more accurate, which can help to overcome these issues. In this paper, we propose a multi-task deep-learning-based method for lung infection segmentation on CT-scan images. Our proposed method starts by segmenting the lung regions that may be infected and then segments the infections within these regions. In addition, to perform multi-class segmentation, the proposed model is trained using two-stream inputs. The multi-task learning used in this paper allows us to overcome the shortage of labeled data, and the multi-input stream allows the model to learn from many features that can improve the results. To evaluate the proposed method, several metrics have been used, including Sorensen-Dice similarity, sensitivity, specificity, precision, and MAE. The experiments show that the proposed method can segment lung infections with high performance even with a shortage of data and labeled images. In addition, compared with state-of-the-art methods, our method achieves good performance: it reached 78.6% for Dice, 71.1% for sensitivity, 99.3% for specificity, 85.6% for precision, and 0.062 for the mean absolute error (MAE) metric, which demonstrates the effectiveness of the proposed method for lung infection segmentation.

17.
Comput Biol Med ; 139: 105002, 2021 12.
Article in English | MEDLINE | ID: covidwho-1487672

ABSTRACT

The immense spread of coronavirus disease 2019 (COVID-19) has left healthcare systems incapable of diagnosing and testing patients at the required rate. Given the effects of COVID-19 on pulmonary tissues, chest radiographic imaging has become a necessity for screening and monitoring the disease. Numerous studies have proposed Deep Learning approaches for the automatic diagnosis of COVID-19. Although these methods achieved outstanding performance in detection, they used limited chest X-ray (CXR) repositories for evaluation, usually with only a few hundred COVID-19 CXR images. Such data scarcity prevents reliable evaluation of Deep Learning models and carries the potential of overfitting. In addition, most studies showed no or limited capability in infection localization and severity grading of COVID-19 pneumonia. In this study, we address this urgent need by proposing a systematic and unified approach for lung segmentation and COVID-19 localization with infection quantification from CXR images. To accomplish this, we have constructed the largest benchmark dataset, with 33,920 CXR images including 11,956 COVID-19 samples, where the annotation of ground-truth lung segmentation masks is performed on CXRs by a human-machine collaborative approach. An extensive set of experiments was performed using state-of-the-art segmentation networks: U-Net, U-Net++, and Feature Pyramid Networks (FPN). The developed network, after an iterative process, reached a superior performance for lung region segmentation, with an Intersection over Union (IoU) of 96.11% and a Dice Similarity Coefficient (DSC) of 97.99%. Furthermore, COVID-19 infections of various shapes and types were reliably localized with 83.05% IoU and 88.21% DSC. Finally, the proposed approach achieved an outstanding COVID-19 detection performance, with both sensitivity and specificity values above 99%.


Subject(s)
COVID-19 , Humans , Lung/diagnostic imaging , SARS-CoV-2 , Thorax , X-Rays
18.
Diagnostics (Basel) ; 11(11)2021 Oct 20.
Article in English | MEDLINE | ID: covidwho-1480630

ABSTRACT

(1) Background: COVID-19 has become a global epidemic. This work aims to extract 3D infection regions from COVID-19 CT images. (2) Methods: First, COVID-19 CT images are processed with lung region extraction and data enhancement. In this strategy, the gradient changes of voxels in different directions respond to geometric characteristics. Because of the complexity of tubular tissues in the lung region, they are clustered toward the lung parenchyma center based on their filtered possibility. Thus, the infection is enhanced after data enhancement. Then, a deep weighted U-Net is established to refine the 3D infection texture, and a weighted loss function is introduced. It changes the cost calculation for different samples, causing target samples to dominate the convergence direction. Finally, the trained network effectively extracts 3D infection from CT images by adjusting the driving strategy for different samples. (3) Results: Using accuracy, precision, recall, and coincidence rate, 20 subjects from a private dataset and eight subjects from the Kaggle Competition COVID-19 CT dataset were used to test this method in a hold-out validation framework. This work achieved good performance both on the private dataset (99.94 ± 00.02%, 60.42 ± 11.25%, 70.79 ± 09.35%, and 63.15 ± 08.35%) and on the public dataset (99.73 ± 00.12%, 77.02 ± 06.06%, 41.23 ± 08.61%, and 52.50 ± 08.18%). We also applied some extra indicators to test the data augmentation and different models, and statistical tests verified the significant differences between the models. (4) Conclusions: This study provides a COVID-19 infection segmentation technique, which provides an important prerequisite for the quantitative analysis of COVID-19 CT images.

19.
Appl Soft Comput ; 113: 107947, 2021 Dec.
Article in English | MEDLINE | ID: covidwho-1466058

ABSTRACT

COVID-19 infection segmentation has essential applications in determining the severity of a COVID-19 patient's condition and can provide a necessary basis for doctors to adopt a treatment scheme. However, in clinical applications, infection segmentation is performed by human beings, which is time-consuming and generally introduces bias. In this paper, we developed a novel evolvable adversarial framework for COVID-19 infection segmentation. Three generator networks compose an evolutionary population that accommodates the current discriminator; that is, the generator networks evolve with different mutations instead of a single adversarial objective in order to provide sufficient gradient feedback. Compared with existing work that enforces a Lipschitz constraint by weight clipping, which may lead to exploding or vanishing gradients, the proposed model also incorporates a gradient penalty into the network, penalizing the norm of the discriminator's gradient with respect to its input. Experiments on several COVID-19 CT scan datasets verified that the proposed method achieves superior effectiveness and stability for COVID-19 infection segmentation.

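A minimal sketch of a gradient penalty term of the kind described above: the norm of the discriminator's gradient with respect to interpolated inputs is penalized toward 1. The discriminator here is a toy stand-in, not the paper's model.

```python
# Illustrative only: gradient penalty on interpolations between real and fake samples.
import torch

def gradient_penalty(discriminator, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = discriminator(interp)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=interp,
                                create_graph=True, retain_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()

disc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 1))
gp = gradient_penalty(disc, torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```
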
20.
J Pers Med ; 11(10)2021 Oct 07.
Article in English | MEDLINE | ID: covidwho-1463734

ABSTRACT

BACKGROUND: Early and accurate detection of COVID-19-related findings (such as well-aerated regions, ground-glass opacity, crazy paving and linear opacities, and consolidation in lung computed tomography (CT) scans) is crucial for preventive measures and treatment. However, the visual assessment of lung CT scans is a time-consuming process, particularly in the case of trivial lesions, and requires medical specialists. METHOD: A recent breakthrough in deep learning methods has boosted the diagnostic capability of computer-aided diagnosis (CAD) systems and further aided health professionals in making effective diagnostic decisions. In this study, we propose a domain-adaptive CAD framework, namely the dilated aggregation-based lightweight network (DAL-Net), for effective recognition of trivial COVID-19 lesions in CT scans. Our network design achieves a fast execution speed (inference time is 43 ms on a single image) with optimal memory consumption (almost 9 MB). To evaluate the performance of the proposed and state-of-the-art models, we considered two publicly accessible datasets, namely COVID-19-CT-Seg (comprising a total of 3520 images of 20 different patients) and MosMed (including a total of 2049 images of 50 different patients). RESULTS: Our method exhibits an average area under the curve (AUC) of up to 98.84%, 98.47%, and 95.51% for COVID-19-CT-Seg, MosMed, and the cross-dataset setting, respectively, and outperforms various state-of-the-art methods. CONCLUSIONS: These results demonstrate that deep learning-based models are an effective tool for building a robust CAD solution based on CT data in response to the present disaster of COVID-19.
